Data Placement Strategy for Hadoop Clusters
Author
Abstract
Wireless technology has become very widely used, and an array of security measures, such as authentication, confidentiality strategies, and security schemes based on the 802.11 wireless communication protocol, have been proposed and applied to real-time wireless networks. However, most of these measures address security only in a static mode, in which all security levels are configured when the wireless network system is built. In real-time applications such as a stock quoting and trading system, users may need a flexible quality of security that can be measured, for example, in security levels: the data for a stock's current price may require higher security than the data for the same stock ten years earlier. Flexible security mechanisms for real-time applications transmitting packets over wireless networks are therefore highly desirable. In this paper, we propose a novel security-aware packet scheduling strategy for a real-time wireless link, in which the network can dynamically set security levels according to different user requests.
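The core idea of such a strategy can be sketched in a few lines: queue packets by deadline, and degrade each packet's requested security level only as far as needed for it to still meet that deadline. This is a minimal illustration, not the paper's actual algorithm; the cost model (a fixed per-level overhead) and all names here are assumptions for the sketch.

```python
import heapq
from dataclasses import dataclass, field

# Assumed cost model: each security level adds this much crypto overhead.
OVERHEAD_MS_PER_LEVEL = 0.5

@dataclass(order=True)
class Packet:
    deadline_ms: float                          # earliest-deadline-first key
    security_level: int = field(compare=False)  # requested level, 1 (lowest) .. 8
    payload: bytes = field(compare=False, default=b"")

def admit(queue: list, pkt: Packet) -> bool:
    """Push pkt onto the earliest-deadline-first queue, degrading its
    security level when the crypto overhead at the requested level
    would make the packet miss its deadline."""
    affordable = int(pkt.deadline_ms / OVERHEAD_MS_PER_LEVEL)
    if affordable < 1:
        return False  # reject: cannot be secured at any level in time
    pkt.security_level = min(pkt.security_level, affordable)
    heapq.heappush(queue, pkt)
    return True
```

A packet with a 4 ms deadline keeps its requested level 8, while one with a 1 ms deadline is degraded to level 2; a packet that cannot afford even level 1 is rejected outright, which is the essential trade-off between timeliness and security that the paper explores.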
Similar Articles
Adaptive Dynamic Data Placement Algorithm for Hadoop in Heterogeneous Environments
The Hadoop MapReduce framework is an important distributed processing model for large-scale data-intensive applications. The current Hadoop and the existing HDFS rack-aware data placement strategy assume a homogeneous cluster, in which every node has the same computing capacity and is assigned the same workload. Default Hadoop d...
Intelligent Block Placement Strategy in Heterogeneous Hadoop Clusters
MapReduce is an important distributed processing model for large-scale data-intensive applications. As an open-source implementation of MapReduce, Hadoop provides enterprises with a cost-efficient solution for their analytics needs. However, the default HDFS block placement policy assumes that computing nodes in a cluster are homogeneous, and tries to balance load by placing blocks randomly, wh...
Morpho: A decoupled MapReduce framework for elastic cloud computing
MapReduce as a service enjoys wide adoption in commercial clouds today [3,23]. But most cloud providers simply deploy native Hadoop [24] systems on their cloud platforms to provide MapReduce services, without any adaptation to these virtualized environments [6,25]. In cloud environments, the basic executing units of data processing are virtual machines. Each user's virtual cluster needs to deploy HD...
An Improved Data Placement Strategy in a Heterogeneous Hadoop Cluster
Hadoop Distributed File System (HDFS) is designed to store big data reliably and to stream these data at high bandwidth to user applications. However, the default HDFS block placement policy assumes that all nodes in the cluster are homogeneous, and randomly places blocks without considering any node's resource characteristics, which decreases the self-adaptability of the system. In this paper, we ...
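The heterogeneity-aware alternative that these papers pursue can be illustrated by weighting replica placement by node capacity instead of choosing uniformly at random. This is a hedged sketch only: the node names, capacity table, and replication factor below are invented for illustration and are not the placement policies proposed in the cited papers.

```python
import random
import zlib

# Assumed normalized compute capacities, measured offline per node.
NODE_CAPACITY = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}

def place_replicas(block_id: str, replication: int = 2) -> list[str]:
    """Pick replica targets with probability proportional to node capacity,
    rather than a uniform-random choice, so faster nodes receive
    proportionally more blocks."""
    nodes = list(NODE_CAPACITY)
    weights = [NODE_CAPACITY[n] for n in nodes]
    rng = random.Random(zlib.crc32(block_id.encode()))  # deterministic per block
    chosen: list[str] = []
    while len(chosen) < replication:
        pick = rng.choices(nodes, weights=weights, k=1)[0]
        if pick not in chosen:  # keep replicas on distinct nodes
            chosen.append(pick)
    return chosen
```

Over many blocks, node-a (capacity 4.0) ends up holding roughly four times as many replicas as node-c (capacity 1.0), which is the load-proportional behavior a heterogeneous cluster needs.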
Performance Improvement of Map Reduce through Enhancement in Hadoop Block Placement Algorithm
In the last few years, a huge volume of data has been produced from multiple sources across the globe. Dealing with such a huge volume of data has given rise to the so-called "Big Data problem", which can be solved only with new computing paradigms and platforms; this led to the emergence of Apache Hadoop. Inspired by Google's private cluster platform, a few independent software developers develope...
Achieving Load Balancing of HDFS Clusters Using Markov Model
The combination of Hadoop and HDFS is becoming a de facto standard for handling big data. HDFS is a distributed file system designed for big data. In HDFS, a file consists of multiple large blocks. HDFS's central management tries to scatter these blocks across different nodes to maximize I/O throughput. Hadoop is a framework that supports data-intensive parallel a...